ZKROWNN: Zero Knowledge Right of Ownership for Neural Networks
Training contemporary AI models requires investment in procuring learning
data and computing resources, making the models intellectual property of the
owners. Popular model watermarking solutions rely on key input triggers for
detection; the keys have to be kept private to prevent discovery, forging, and
removal of the hidden signatures. We present ZKROWNN, the first automated
end-to-end framework utilizing Zero-Knowledge Proofs (ZKPs) that enables an
entity to validate its ownership of a model while preserving the privacy of
the watermarks. ZKROWNN permits a third-party client to verify model ownership
in less than a second, requiring as little as a few KBs of communication.
Comment: Published and presented at DAC 202
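The sub-second, few-KB verification claimed above reflects the succinctness of modern ZKPs. As a hedged illustration only (this is a classic Schnorr proof of knowledge over a toy group, not ZKROWNN's actual circuit, which proves watermark presence in a model), the sketch below shows how a verifier can check that a prover holds a secret without ever seeing it:

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge (Fiat-Shamir variant), illustrating the
# ZKP idea: prove knowledge of x in y = G^x mod P without revealing x.
# The "secret key" here merely stands in for a private watermark key;
# P and G are illustrative toy parameters, not production choices.
P = 2**127 - 1   # a Mersenne prime; arithmetic in Z_P^*
G = 3            # public base

def prove(secret_key):
    """Prover: output public key y plus a proof (t, s) that it knows x."""
    y = pow(G, secret_key, P)
    r = secrets.randbelow(P - 1)
    t = pow(G, r, P)                                   # commitment
    c = int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big")
    s = (r + c * secret_key) % (P - 1)                 # response
    return y, t, s

def verify(y, t, s):
    """Verifier: a single modular check; learns nothing about the secret."""
    c = int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big")
    return pow(G, s, P) == (t * pow(y, c, P)) % P

watermark_key = 123456789          # stays private to the prover
assert verify(*prove(watermark_key))
```

The proof itself is just a few integers, which is why communication stays in the KB range even for far more elaborate statements.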
zPROBE: Zero Peek Robustness Checks for Federated Learning
Privacy-preserving federated learning allows multiple users to jointly train
a model with coordination of a central server. The server only learns the final
aggregation result, thus the users' (private) training data is not leaked from
the individual model updates. However, keeping the individual updates private
allows malicious users to perform Byzantine attacks and degrade the accuracy
without being detected. The best existing defenses against Byzantine workers
rely on robust rank-based statistics, e.g., the median, to find malicious updates.
However, implementing privacy-preserving rank-based statistics is nontrivial
and not scalable in the secure domain, as it requires sorting all individual
updates. We establish the first private robustness check that uses
high-breakdown-point rank-based statistics on aggregated model updates. By exploiting
randomized clustering, we significantly improve the scalability of our defense
without compromising privacy. We leverage our statistical bounds in
zero-knowledge proofs to detect and remove malicious updates without revealing
the private user updates. Our novel framework, zPROBE, enables Byzantine
resilient and secure federated learning. Empirical evaluations demonstrate that
zPROBE provides a low-overhead solution to defend against state-of-the-art
Byzantine attacks while preserving privacy.
Comment: ICCV 202
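As a rough sketch of the statistical idea (computed in the clear here, without the secret sharing and zero-knowledge machinery zPROBE actually uses; all names and parameters are illustrative), one can derive robust acceptance bounds from the medians of randomly clustered updates and discard outliers before averaging:

```python
import random
import statistics

def robust_aggregate(updates, num_clusters=4, num_stds=2.0, seed=0):
    """Accept only updates within median +/- num_stds * sigma, where the
    median and sigma are estimated from medians of random clusters."""
    rng = random.Random(seed)
    shuffled = updates[:]
    rng.shuffle(shuffled)
    clusters = [shuffled[i::num_clusters] for i in range(num_clusters)]
    # The median of each random cluster approximates an honest update.
    cluster_medians = [statistics.median(c) for c in clusters if c]
    mu = statistics.median(cluster_medians)
    sigma = statistics.pstdev(cluster_medians) or 1e-9
    accepted = [u for u in updates if abs(u - mu) <= num_stds * sigma]
    return sum(accepted) / len(accepted), accepted

vals = [0.1, 0.12, 0.09, 0.11, 0.1, 5.0]   # last update is Byzantine
agg, kept = robust_aggregate(vals)          # 5.0 is flagged and dropped
```

Clustering matters for scalability: medians are taken over small random clusters rather than by sorting every individual update, which is what makes rank-based checks tractable in the secure domain.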
Tailor: Altering Skip Connections for Resource-Efficient Inference
Deep neural networks use skip connections to improve training convergence.
However, these skip connections are costly in hardware, requiring extra buffers
and increasing on- and off-chip memory utilization and bandwidth requirements.
In this paper, we show that skip connections can be optimized for hardware when
tackled with a hardware-software codesign approach. We argue that while a
network's skip connections are needed for the network to learn, they can later
be removed or shortened to provide a more hardware-efficient implementation
with minimal to no accuracy loss. We introduce Tailor, a codesign tool whose
hardware-aware training algorithm gradually removes or shortens a fully trained
network's skip connections to lower their hardware cost. Tailor improves
resource utilization by up to 34% for BRAMs, 13% for FFs, and 16% for LUTs for
on-chip, dataflow-style architectures. Tailor increases performance by 30% and
reduces memory bandwidth by 45% for a 2D processing element array architecture.
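One simple way to picture the gradual removal (a hypothetical sketch, not Tailor's actual training algorithm) is a residual connection scaled by a factor that anneals from one to zero during fine-tuning; once the factor reaches zero, the skip path and the buffers it required can be dropped from the hardware entirely:

```python
# Hypothetical sketch of fading out a skip connection; function names
# and the linear schedule are illustrative, not Tailor's API.

def residual_block(x, transform, alpha):
    """y = transform(x) + alpha * x; alpha=1 is a full skip, alpha=0 none."""
    return transform(x) + alpha * x

def anneal(step, total_steps):
    """Linearly decay the skip-connection weight over fine-tuning."""
    return max(0.0, 1.0 - step / total_steps)

double = lambda x: 2.0 * x   # stand-in for the block's conv/BN path
outputs = [residual_block(3.0, double, anneal(s, 4)) for s in range(5)]
# alpha steps 1.0, 0.75, 0.5, 0.25, 0.0, letting the network adapt
# gradually instead of removing the skip connection all at once.
```

The gradual schedule is the point: removing skips abruptly tends to hurt accuracy, while annealing lets the remaining weights absorb the change before the hardware cost is eliminated.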